275 results for New sequencing methods

in Queensland University of Technology - ePrints Archive


Relevance:

100.00%

Abstract:

Evaluation, selection and, ultimately, decision making are important issues that engineers face over the course of long-running projects. Engineers apply both mathematical and non-mathematical methods to reach accurate decisions, yet however extensive these methods are, the effect of the chosen method on the outputs obtained and the decisions made remains open to question. This is especially contentious when the evaluation is made among non-quantitative alternatives. In civil engineering and construction management problems, the criteria include both quantitative and qualitative factors, such as aesthetics, construction duration, building and operating costs, and environmental considerations; as a result, decision making frequently takes place among non-quantitative alternatives. Traditional comparison methods, which rely on clear-cut and inflexible mathematics, have long been criticised. This paper briefly reviews traditional methods of evaluating alternatives and offers a new decision-making method based on fuzzy calculations. The main focus of the research is engineering problems that have a flexible nature and vague boundaries. The suggested method makes the evaluation analysable for decision makers and can handle multi-criteria and multi-referee problems. To ease the calculations, a program named DeMA is introduced.
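The abstract does not reproduce DeMA itself; as a minimal sketch of the kind of fuzzy calculation it describes (weighted aggregation of triangular fuzzy numbers over mixed quantitative and qualitative criteria), with all ratings and weights invented for illustration:

```python
# Minimal sketch of fuzzy multi-criteria evaluation using triangular
# fuzzy numbers (a, b, c) = (pessimistic, most likely, optimistic).
# An illustration of the general technique, not the DeMA program.

def tfn_scale(t, w):
    """Scale a triangular fuzzy number by a crisp weight."""
    return tuple(w * x for x in t)

def tfn_add(t1, t2):
    return tuple(x + y for x, y in zip(t1, t2))

def defuzzify(t):
    """Centroid of a triangular fuzzy number."""
    return sum(t) / 3.0

# Hypothetical ratings of two design alternatives on three criteria
# (aesthetics, duration, cost), each a triangular fuzzy number on 0-10.
alternatives = {
    "A": [(6, 7, 9), (4, 5, 6), (5, 6, 8)],
    "B": [(7, 8, 9), (3, 4, 5), (4, 5, 6)],
}
weights = [0.5, 0.3, 0.2]  # criterion weights, summing to 1

for name, ratings in alternatives.items():
    total = (0.0, 0.0, 0.0)
    for t, w in zip(ratings, weights):
        total = tfn_add(total, tfn_scale(t, w))
    print(name, "fuzzy score:", total, "crisp:", round(defuzzify(total), 2))
```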

Relevance:

100.00%

Abstract:

Next-generation sequencing techniques have advanced rapidly over the last decade, providing researchers with low-cost, high-throughput alternatives to traditional Sanger sequencing. These techniques have evolved quickly from the first generation to the fourth, with very broad applications such as unravelling the complexity of the genome in terms of genetic variation, and have had a high impact on the biological field. In this review, we discuss the transition of sequencing from the second generation to the third and fourth generations, and describe some of their novel biological applications. With the advancement of the technology, the earlier challenges of instrument size, throughput flexibility, ease of data analysis and run times are being addressed. However, prospective analyses of whether knowledge of any newly identified variants affects clinical outcomes are still needed.

Relevance:

90.00%

Abstract:

The research objectives of this thesis were to contribute to Bayesian statistical methodology, both in risk assessment and in spatial and spatio-temporal methods, by modelling error structures using complex hierarchical models. Specifically, I hoped to consider two applied areas and use these applications as a springboard for developing new statistical methods, as well as undertaking analyses that might answer particular applied questions. Thus, this thesis considers a series of models, firstly in the context of risk assessments for recycled water, and secondly in the context of water usage by crops. The research objective was to model error structures using hierarchical models in two problems: risk assessment analyses for wastewater and, secondly, a four-dimensional dataset assessing differences between cropping systems over time and over three spatial dimensions. The aim was to use the simplicity and insight afforded by Bayesian networks to develop appropriate models for risk scenarios, and again to use Bayesian hierarchical models to explore the necessarily complex modelling of four-dimensional agricultural data. The specific objectives of the research were: to develop a method for calculating credible intervals for the point estimates of Bayesian networks; to develop a model structure incorporating all the experimental uncertainty associated with various constants, thereby allowing the calculation of more credible credible intervals for a risk assessment; to model a single day's data from the agricultural dataset in a way that satisfactorily captured the complexities of the data; to build a model for several days' data, in order to consider how the full data might be modelled; and finally to build a model for the full four-dimensional dataset and to consider the time-varying nature of the contrast of interest, having satisfactorily accounted for possible spatial and temporal autocorrelations. This work forms five papers, two of which have been published, two submitted, and the final paper still in draft. The first two objectives were met by recasting the risk assessments as directed acyclic graphs (DAGs). In the first case, we elicited uncertainty for the conditional probabilities needed by the Bayesian net, incorporated these into a corresponding DAG, and used Markov chain Monte Carlo (MCMC) to find credible intervals for all the scenarios and outcomes of interest. In the second case, we incorporated the experimental data underlying the risk assessment constants into the DAG, and also treated some of those data as an 'errors-in-variables' problem [Fuller, 1987]. This illustrated a simple method for incorporating experimental error into risk assessments. In considering one day of the three-dimensional agricultural data, it became clear that geostatistical models or conditional autoregressive (CAR) models over the three dimensions were not the best way to approach the data. Instead, CAR models were used with neighbours only in the same depth layer. This gave flexibility to the model, allowing both the spatially structured and unstructured variances to differ at all depths. We call this model the CAR layered model. Given the experimental design, the fixed part of the model could have been modelled as a set of means by treatment and by depth, but doing so allows little insight into how the treatment effects vary with depth.
Hence, a number of essentially non-parametric approaches were taken to examine the effects of depth on treatment, with the model of choice incorporating an errors-in-variables approach for depth in addition to a non-parametric smooth. The statistical contribution here was the introduction of the CAR layered model; the applied contribution was the analysis of moisture over depth and the estimation of the contrast of interest together with its credible intervals. These models were fitted using WinBUGS [Lunn et al., 2000]. The work in the fifth paper addresses the fact that, with large datasets, the use of WinBUGS becomes problematic because of its highly correlated term-by-term updating. In this work, we introduce a Gibbs sampler with block updating for the CAR layered model. The Gibbs sampler was implemented by Chris Strickland using pyMCMC [Strickland, 2010]. This framework is then used to consider five days' data, and we show that soil moisture for all the various treatments reaches levels particular to each treatment at a depth of 200 cm and thereafter stays constant, albeit with variances that increase with depth. In an analysis across three spatial dimensions and across time, there are many interactions of time and the spatial dimensions to be considered. Hence, we chose to use a daily model and to repeat the analysis at all time points, effectively creating an interaction model of time by the daily model. Such an approach allows great flexibility; however, it does not allow insight into the way in which the parameter of interest varies over time. Hence, a two-stage approach was also used, with estimates from the first stage being analysed as a set of time series. We see this spatio-temporal interaction model as a useful approach to data measured across three spatial dimensions and time, since it does not assume additivity of the random spatial or temporal effects.
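As a rough illustration of the CAR layered neighbourhood structure described above (neighbours only within the same depth layer), the sketch below builds the corresponding intrinsic-CAR precision matrix with numpy; the grid dimensions are invented, and the thesis itself used WinBUGS and pyMCMC rather than this code:

```python
# Sketch of the "CAR layered" structure: sites on an nx x ny grid at nz
# depth layers, with CAR neighbours only within the same depth layer.
# Builds the intrinsic CAR precision matrix Q = D - W, layer by layer.
import numpy as np

nx, ny, nz = 4, 4, 3          # hypothetical grid: 4x4 sites, 3 depth layers

def layer_precision(nx, ny):
    """Intrinsic CAR precision Q = D - W for a rook-neighbour grid."""
    n = nx * ny
    W = np.zeros((n, n))
    for i in range(nx):
        for j in range(ny):
            k = i * ny + j
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < nx and 0 <= jj < ny:
                    W[k, ii * ny + jj] = 1.0
    return np.diag(W.sum(axis=1)) - W

# Block-diagonal precision: layers are conditionally independent, which
# is what lets each depth have its own structured/unstructured variance.
Q = np.kron(np.eye(nz), layer_precision(nx, ny))
print(Q.shape)  # (48, 48)
```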

Relevance:

90.00%

Abstract:

This case-study explores alternative and experimental methods of research data acquisition through an emerging research methodology, 'Guerrilla Research Tactics' (GRT). The premise is that the researcher develops covert tactics for attracting and engaging with research participants. These methods range from simple analogue interventions to bespoke physical artefacts containing an embedded digital link to a live, interactive data-collecting resource, such as an online poll or survey. The artefacts are purposefully placed in environments where the researcher anticipates an encounter and response from the potential research participant; the choice of design and placement is specific and intentional. This case-study assesses the application of GRT as an alternative, engaging and interactive method of data acquisition for higher degree research. Extending Gauntlett's definition of 'new creative methods… an alternative to language driven qualitative research methods' (2007), it contributes to the existing body of literature addressing creative and interactive approaches to HDR data collection. The case-study was undertaken with Master of Architecture and Urban Design research students at QUT in 2012. Typically, students within these creative disciplines view research as a taxing and boring process that distracts them from their studio design focus, and an obstacle many of them face is acquiring data from their intended participant groups. In response to these challenges, the authors worked with students to develop research methods that are creative, fun and engaging for both the students and their research participants. GRT is influenced by and developed from a combination of participatory action research (Kindon, 2008) and unobtrusive research methods (Kellehear, 1993) to enhance social research, and it takes unobtrusive research in a new direction, beyond typical social research methods. The Masters research students developed alternative methods for acquiring data that relied on a combination of analogue design interventions and online platforms commonly distributed through social networks. They identified critical issues requiring action by the community, and the processes they developed focused on engaging with communities to propose solutions. Key characteristics shared between GRT and guerrilla activism are notions of political issues, the unexpected, the unconventional, and being interactive, unique and thought-provoking. The trend of guerrilla activism has been adapted to marketing, communication, gardening, craftivism, theatre, poetry and art. Focusing on the action element and examining current trends within guerrilla marketing, we believe that GRT can be applied to a range of research areas within various academic disciplines.
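As a minimal sketch of the 'embedded digital link' mechanism (not taken from the case-study itself), a QR code pointing at a hypothetical survey URL can be generated with the third-party qrcode package and printed onto a physical artefact:

```python
# Minimal sketch of the "embedded digital link" idea: generate a QR code
# pointing at an online survey, for printing onto a physical artefact.
# Requires the third-party package: pip install qrcode[pil]
import qrcode

survey_url = "https://example.com/my-survey"  # hypothetical survey link
img = qrcode.make(survey_url)                 # returns a PIL image
img.save("artefact_qr.png")
print("QR code saved; embed it in the printed artefact.")
```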

Relevance:

90.00%

Abstract:

This paper explores what we are calling "Guerrilla Research Tactics" (GRT): research methods that exploit emerging mobile and cloud-based digital technologies. We examine some case studies in the use of this technology to generate research data directly from the physical fabric and the people of the city. We argue that GRT is a novel way of engaging public participation in urban, place-based research because it facilitates the co-creation of knowledge with city inhabitants 'on the fly'. This paper discusses the potential of these new research techniques and what they have to offer researchers operating in the creative disciplines and beyond. This work builds on and extends Gauntlett's "new creative methods" (2007) and contributes to the existing body of literature addressing creative and interactive approaches to data collection.

Relevance:

90.00%

Abstract:

The ineffectiveness of current design processes has been well studied and has resulted in widespread calls for the evolution and development of new management processes. Even following the advent of BIM, we continue to move from one stage to another without necessarily having resolved all the issues. CAD design technology, if well handled, could have significantly raised the quality and efficiency of current processes, but in practice this was not fully realized. Technology alone, therefore, cannot solve all the problems, and the advent of BIM could result in a similar bottleneck. For a precise definition of the problem to be solved, we should start by understanding the main current bottlenecks that have yet to be overcome by either new technologies or management processes, and the impact of human behaviour on the adoption and utilization of new technologies. The fragmented and dispersed nature of the AEC sector, and the huge number of small organizations that comprise it, are a major limiting factor. Several authors have addressed this issue, and more recently IDDS has been defined as the highest level of achievement. However, what has been written on IDDS describes a highly idealised target state: a holistic, utopian proposition intended to set the research agenda for moving towards that state. Key to IDDS is the framing of a new management model that addresses the problems associated with the key aspects of technology, processes, policies and people. One of the primary areas to be further studied is the process of collaborative work and understanding, together with the development of proposals to overcome the many cultural barriers that currently exist and impede the advance of new management methods. The purpose of this paper is to define and delimit the problems to be solved so that it is possible to implement a new management model for a collaborative design process.

Relevance:

90.00%

Abstract:

Over the past decade the mitochondrial (mt) genome has become the most widely used genomic resource available for systematic entomology. While the availability of other types of '-omics' data - in particular transcriptomes - is increasing rapidly, mt genomes are still vastly cheaper to sequence and far less demanding of high-quality templates. Furthermore, almost all other '-omics' approaches also sequence the mt genome, so it can form a bridge between legacy and contemporary datasets. Mitochondrial genomes have now been sequenced for all insect orders, and in many instances for representatives of each major lineage within orders (suborders, series or superfamilies, depending on the group). They have also been applied to systematic questions at all taxonomic scales, from resolving interordinal relationships (e.g. Cameron et al., 2009; Wan et al., 2012; Wang et al., 2012), through many intraordinal (e.g. Dowton et al., 2009; Timmermans et al., 2010; Zhao et al., 2013a) and family-level studies (e.g. Nelson et al., 2012; Zhao et al., 2013b), to population/biogeographic studies (e.g. Ma et al., 2012). Methodological issues around the use of mt genomes in insect phylogenetic analyses, and the empirical results found to date, have recently been reviewed by Cameron (2014), yet the technical aspects of sequencing and annotating mt genomes were not covered. Most papers that report new mt genomes describe their methods in a simplified form that can be difficult to replicate without specific knowledge of the field. Published studies use such a wide range of approaches, usually without justification for the one chosen, that confusion about commonly used jargon such as 'long PCR' and 'primer walking' can be a serious barrier to entry. Furthermore, sequenced mt genomes have been annotated (gene locations defined) to widely varying standards, and improving data quality through consistent annotation procedures will benefit all downstream users of these datasets. The aims of this review are therefore to: 1. describe in detail the various sequencing methods used on insect mt genomes; 2. explore the strengths and weaknesses of the different approaches; 3. outline the procedures and software used for insect mt genome annotation; and 4. highlight quality-control steps used for new annotations and for improving the re-annotation of previously sequenced mt genomes used in systematic or comparative research.
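The review's annotation quality-control theme can be illustrated with a short Biopython sketch (not from the review itself) that tabulates the gene features of a GenBank-format mt genome record; the file name is a placeholder:

```python
# Sketch of a basic annotation quality-control step: parse a GenBank-format
# mt genome record with Biopython and tabulate its annotated genes.
# "my_mtgenome.gb" is a placeholder for any downloaded record.
from Bio import SeqIO

record = SeqIO.read("my_mtgenome.gb", "genbank")
print(record.id, len(record.seq), "bp")

for feature in record.features:
    if feature.type in ("CDS", "tRNA", "rRNA"):
        gene = feature.qualifiers.get("gene", ["?"])[0]
        print(f"{feature.type:5s} {gene:8s} "
              f"{int(feature.location.start):6d}..{int(feature.location.end):6d} "
              f"strand {feature.location.strand:+d}")

# A typical insect mt genome contains 13 CDS, 22 tRNAs and 2 rRNAs;
# a deviation from those counts is a quick flag for annotation problems.
```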

Relevance:

90.00%

Abstract:

We implemented six different boarding strategies (Wilma, Steffen, Reverse Pyramid, Random, Blocks and By Letter) to investigate boarding times for Boeing 777 and Airbus A380 aircraft, and we introduce three new boarding methods in search of an optimum strategy. Our models explicitly simulate the behaviour of groups of people travelling together, and they explicitly simulate the time taken to stow luggage as part of the boarding process. Results from the simulation demonstrate that the Reverse Pyramid method is the best existing boarding method for the Boeing 777, and the Steffen method is the best for the Airbus A380. Of the newly suggested methods, aisle-first boarding is the best strategy for the Boeing 777 and row arrangement is the best for the Airbus A380. Overall, the best boarding strategy is the aisle-first method for the Boeing 777 and the Steffen method for the Airbus A380.
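The authors' agent-based model is not reproduced here; as a toy illustration of how boarding orders can be compared, the sketch below simulates a single-aisle aircraft with one passenger per row and a fixed luggage-stowing delay, both invented parameters:

```python
# Toy single-aisle boarding simulation, far simpler than the model above
# (no travel groups, fixed stowing time), to show how orders are compared.
import random

ROWS, STOW = 30, 8  # 30 rows; stowing luggage blocks the aisle for 8 ticks

def board(order):
    """Return ticks until all passengers (one per row) are seated."""
    queue = list(order)           # passengers still outside, front first
    aisle = {}                    # aisle position -> [target_row, stow_left]
    t = 0
    while queue or aisle:
        t += 1
        for pos in sorted(aisle, reverse=True):   # front passengers move first
            target, stow = aisle[pos]
            if pos == target:
                if stow > 1:
                    aisle[pos][1] -= 1            # still stowing luggage
                else:
                    del aisle[pos]                # seated
            elif pos + 1 not in aisle:
                aisle[pos + 1] = aisle.pop(pos)   # step forward one row
        if queue and 0 not in aisle:
            aisle[0] = [queue.pop(0), STOW]       # next passenger enters
    return t

random_order = random.sample(range(ROWS), ROWS)
back_to_front = list(range(ROWS - 1, -1, -1))
print("random:       ", board(random_order), "ticks")
print("back-to-front:", board(back_to_front), "ticks")
```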

Relevance:

90.00%

Abstract:

Aim: To estimate the colonoscopy burden of introducing population screening for colorectal cancer in New Zealand. Methods: Screening for colorectal cancer using biennial immunochemical faecal occult blood tests offered to people aged 50-74 years was modelled using population estimates from Statistics New Zealand for 2011-2031. Modelling to determine colonoscopy requirements was based on participation and test positivity rates from published results of screening programmes. Estimates of the number of procedures required for ongoing adenoma surveillance were calculated using adenoma yields from the screening literature and the New Zealand Guidelines for Adenoma Surveillance. Sensitivity analysis was undertaken on key parameters. Results: For a test positivity of 6.4%, biennial screening using immunochemical faecal occult blood testing with a 60% participation rate would require 18,000 colonoscopies nationally, increasing to 28,000 by 2031. The majority of procedures are direct referrals from a positive FOBT, with surveillance colonoscopy numbers building over time. Conclusion: Colonoscopy requirements for immunochemical faecal occult blood based population screening for colorectal cancer are high. Significant expansion of services is required, together with careful management of surveillance procedures, to ensure timely delivery of initial colonoscopies while maintaining symptomatic services. A model re-run informed by data from the screening pilot will allow improved estimates for the New Zealand setting.
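A back-of-envelope check of the headline arithmetic; the eligible-population figure is an assumption for illustration, not a number taken from the paper:

```python
# Back-of-envelope check of the screening arithmetic reported above.
# The eligible-population figure is an assumption, not from the paper.
eligible = 1_000_000                     # assumed NZ population aged 50-74
invited_per_year = eligible / 2          # biennial screening
participation = 0.60
positivity = 0.064

screened = invited_per_year * participation
colonoscopies = screened * positivity
print(f"{screened:,.0f} iFOBTs completed per year")
print(f"{colonoscopies:,.0f} direct-referral colonoscopies per year")
# ~19,000/year, in line with the ~18,000 reported; surveillance
# procedures add to this total over time.
```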

Relevance:

90.00%

Abstract:

Background Informed consent is the legal requirement to educate a patient about a proposed medical treatment or procedure so that he or she can make informed decisions. The purpose of the study was to examine the current practice for obtaining informed consent for third molar tooth extractions (wisdom teeth) by Oral and Maxillofacial Surgeons in Australia and New Zealand. Methods An online survey was sent to 180 consultant Oral and Maxillofacial Surgeons in Australia and New Zealand. Surgeons were asked to answer (yes/no) whether they routinely warned of a specific risk of third molar tooth extraction in their written consent. Results 71 replies were received (39%). The only risks that surgeons agreed should be routinely included in written consent were a general warning of infection (not alveolar osteitis), inferior alveolar nerve damage (temporary and permanent) and lingual nerve damage (temporary and permanent). Conclusions There is significant variability among Australian and New Zealand Oral and Maxillofacial Surgeons regarding risk disclosure for third molar tooth extractions. We aim to improve consistency in consent for third molar extractions by developing an evidence-based consent form.

Relevance:

80.00%

Abstract:

Principal topic: Effectuation theory suggests that entrepreneurs develop their new ventures in an iterative way, selecting among possibilities through flexibility and interaction with the market, focusing on the affordability of loss rather than the maximal return on capital invested, and building pre-commitments and alliances with stakeholders (Sarasvathy, 2001, 2008; Sarasvathy et al., 2005, 2006). In contrast, causation may be described as a rationalistic reasoning method for creating a company: after a comprehensive market analysis to discover opportunities, the entrepreneur selects the alternative with the highest expected return and implements it through a business plan. However, little is known about the consequences of following either of these two processes. One aspect that remains unclear is the relationship between newness and effectuation. On one hand, it can be argued that a means-centred process that is interactive (through pre-commitments and alliances with stakeholders from the early phases of venture creation) and open-minded (through the flexibility to exploit contingencies) should encourage and facilitate the development of innovative solutions. On the other hand, a close relationship with 'future first customers' and too strong a focus on the resources and knowledge already within the firm may constrain innovation, or at least radical innovation. While it has been suggested that an effectuation strategy is more likely to be used by innovative entrepreneurs (Sarasvathy, 2001), this hypothesis has not yet been demonstrated. Method: To capture newness in its different aspects, we considered four domains in which newness may occur: new product/service; new methods of promotion and sales; new production methods/sourcing; and market creation. We identified how effectuation may be differently associated with these four domains of newness. To test our four sets of hypotheses, a dataset of 1329 firms (702 nascent and 627 young firms) randomly selected in Australia was examined using ANOVA with Tukey's HSD test. Results and implications: Results indicate a curvilinear relationship between effectuation and newness, in which low and high levels of newness are associated with low levels of effectuation, while a medium level of newness is associated with a high level of effectuation. Implications for academia, practitioners and policy makers are also discussed.
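As a sketch of the ANOVA / Tukey HSD analysis named above, run on synthetic data with statsmodels; the group means are invented purely to mimic the reported curvilinear pattern:

```python
# Sketch of the ANOVA / Tukey HSD comparison on synthetic data:
# effectuation scores grouped by newness level (low / medium / high).
# Group labels and effect sizes are invented for illustration only.
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
scores = np.concatenate([
    rng.normal(2.0, 1.0, 100),   # low newness    -> low effectuation
    rng.normal(3.5, 1.0, 100),   # medium newness -> high effectuation
    rng.normal(2.2, 1.0, 100),   # high newness   -> low effectuation
])
groups = ["low"] * 100 + ["medium"] * 100 + ["high"] * 100

print(pairwise_tukeyhsd(scores, groups, alpha=0.05))
```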

Relevance:

80.00%

Abstract:

Principal topic: The study of the origin and characteristics of venture ideas - or 'opportunities' as they are often called - and their contextual fit are key research goals in entrepreneurship (Davidsson, 2004). For the purpose of this study, we define a venture idea as the entrepreneur's core idea about what to sell, how to sell it, whom to sell it to, and how to acquire or produce the product or service being sold. When realized, the venture idea becomes a 'business model'. Although venture ideas are central to entrepreneurship, their characteristics and their effects on the entrepreneurial process remain poorly understood. According to Schumpeter (1934), entrepreneurs can creatively destroy existing market conditions by introducing new products/services, new production methods, new markets, new sources of supply and the reorganization of industries. The introduction, development and use of new ideas is generally called 'innovation' (Damanpour & Wischnevsky, 2006); 'newness' is a property of innovation and is a relative term denoting the degree of unfamiliarity of a venture idea either to a firm or to a market. Schumpeter's (1934) discussion of five different types of newness indicates that the type of newness is an important issue. More recently, Shane and Venkataraman (2000) called for research taking into consideration not only the variation in the characteristics of individuals but also the heterogeneity of venture ideas. Empirically, Samuelson (2001, 2004) investigated process differences between innovative and imitative venture ideas, but used only a crude dichotomy for venture idea newness. According to Davidsson (2004), entrepreneurs can introduce new economic activities ranging from pure imitation to new to the entire world market, highlighting that newness is a matter of degree. Dahlqvist (2007) examined venture idea newness and attempted a more refined assessment of the degree and type of newness of venture ideas. Building on these predecessors, our study refines the assessment of venture idea newness by measuring the degree of newness (new to the world, new to the market, substantially improved while not entirely new, and imitation) for four different types of newness (product/service, method of production, method of promotion, and customer/target market). We then relate the type and degree of newness to the pace of progress in the nascent venturing process, hypothesizing that newness will slow down the business creation process. Shane and Venkataraman (2000) characterized entrepreneurship as the nexus of opportunities and individuals, and in line with this some scholars have investigated the relationship between the two. For example, Shane (2000) investigated the relatedness between individuals' prior knowledge and the identification of opportunities, and Shepherd and DeTienne (2005) identified a positive relationship between potential financial reward and the identification of innovative venture ideas. Sarasvathy's effectuation theory assumes a high degree of relatedness between founders' skills, knowledge and resources and the selection of venture ideas. However, the entrepreneurship literature offers few analyses of how this relatedness affects the progress of the venturing process. Therefore, we also assess a venture idea's degree of relatedness to prior knowledge and resources, and relate these, too, to the pace of progress in the nascent venturing process.
We hypothesize that relatedness will increase the speed of business creation. Methodology: This study draws on early findings from the Comprehensive Australian Study of Entrepreneurial Emergence (CAUSEE), a longitudinal study whose primary objective is to uncover the factors that initiate, hinder and facilitate the process of emergence and development of new firms. Data were collected from a representative sample of some 30,000 households in Australia using random digit dialling (RDD) telephone survey interviews. The first round of data collection identified 600 entrepreneurs currently involved in the business start-up process. The unit of analysis is the emerging venture, with the respondent acting as its spokesperson. The study methodology allows researchers to identify ventures in the early stages of creation and to follow their progression longitudinally over successive data collection periods. Our measures of newness build on previous work by Dahlqvist (2007); our adapted version was developed over two pre-tests with about 80 participants each. The measures of relatedness were developed through the same two rounds of pre-testing. The pace of progress in the venture creation process is assessed with the help of time-stamped gestation activities, a technique developed in the Panel Study of Entrepreneurial Dynamics (PSED). Results and implications: We hypothesized that venture idea newness slows down the venturing process, whereas relatedness facilitates it. Results from 600 nascent entrepreneurs in Australia indicate marginal support for the hypothesis that relatedness assists gestation progress. Newness is significant, but with the opposite sign to that hypothesized. The results have a number of implications for researchers, business founders, consultants and policy makers in terms of better knowledge of the venture creation process.
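The PSED-style pace measure mentioned in the methodology can be illustrated with a small pandas sketch on invented data: count time-stamped gestation activities per venture and divide by the elapsed months:

```python
# Sketch of a PSED-style pace-of-progress measure: count time-stamped
# gestation activities per venture and divide by the elapsed months.
# The data are invented for illustration.
import pandas as pd

events = pd.DataFrame({
    "venture": ["A", "A", "A", "B", "B"],
    "activity": ["business plan", "first sale", "registered",
                 "business plan", "registered"],
    "date": pd.to_datetime(["2009-01-15", "2009-03-01", "2009-09-20",
                            "2009-02-10", "2009-04-05"]),
})

pace = events.groupby("venture")["date"].agg(
    n="count",
    months=lambda d: (d.max() - d.min()).days / 30.44,
)
pace["activities_per_month"] = pace["n"] / pace["months"]
print(pace)
```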

Relevance:

80.00%

Abstract:

Buffer overflow vulnerabilities continue to prevail, and the sophistication of attacks targeting them is continuously increasing. As a successful attack of this type has the potential to completely compromise the integrity of the targeted host, early detection is vital. This thesis examines generic approaches for detecting executable payload attacks, without prior knowledge of the implementation of the attack, in such a way that new and previously unseen attacks are detectable. Executable payloads are analysed in detail for attacks targeting the Linux and Windows operating systems executing on the Intel IA-32 architecture. The execution flow of attack payloads is analysed and a generic model of execution is examined. A novel classification scheme for executable attack payloads is presented, which allows for the characterisation of executable payloads and facilitates vulnerability and threat assessments, as well as intrusion detection capability assessments for intrusion detection systems. An intrusion detection capability assessment may be used to determine whether a deployed system is able to detect a specific attack, and to identify requirements for intrusion detection functionality in the development of new detection methods. Two novel detection methods are presented that are capable of detecting new and previously unseen executable attack payloads. The detection methods identify and enumerate the executable payload's interactions with the operating system on the targeted host at the time of compromise, and are further validated using real-world data including executable payload attacks.
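The thesis's detection methods (which enumerate operating-system interactions) are not reproduced here; the sketch below shows only the general idea of scanning bytes for crude IA-32 payload indicators such as NOP sleds and int 0x80 opcodes:

```python
# Deliberately simple illustration of scanning data for indicators of
# IA-32 executable payloads: long NOP sleds (0x90) and the Linux
# system-call instruction int 0x80 (bytes CD 80). The thesis's methods
# are far more sophisticated; this only conveys the general idea.
import re

NOP_SLED = re.compile(rb"\x90{16,}")   # 16+ consecutive NOPs
INT_80 = re.compile(rb"\xcd\x80")      # int 0x80

def scan(buf: bytes) -> list[str]:
    findings = []
    for m in NOP_SLED.finditer(buf):
        findings.append(f"NOP sled ({m.end() - m.start()} bytes) at offset {m.start()}")
    for m in INT_80.finditer(buf):
        findings.append(f"int 0x80 at offset {m.start()}")
    return findings

# Synthetic buffer: padding, a 32-byte NOP sled, then shellcode-like bytes
sample = b"A" * 64 + b"\x90" * 32 + b"\x31\xc0\xb0\x01\xcd\x80"
for f in scan(sample):
    print(f)
```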

Relevance:

80.00%

Abstract:

Transition metal oxides are functional materials with advanced applications in many areas, because of their diverse properties (optical, electrical, magnetic, etc.), hardness, thermal stability and chemical resistance. Novel applications of the nanostructures of these oxides are attracting significant interest as new synthesis methods are developed and new structures are reported. Hydrothermal synthesis is an effective process for preparing various delicate structures of metal oxides on scales from a few to tens of nanometres, in particular the highly dispersed intermediate structures that can hardly be obtained through pyro-synthesis. In this thesis, a range of new metal oxide (stable and metastable titanate and niobate) nanostructures, namely nanotubes and nanofibres, were synthesised via a hydrothermal process. Further structural modifications were conducted, and potential applications in catalysis, photocatalysis, adsorption and the construction of ceramic membranes were studied. The morphology evolution during the hydrothermal reaction between Nb2O5 particles and concentrated NaOH was monitored. The study demonstrates that by optimising the reaction parameters (temperature, amount of reactants), one can obtain a variety of nanostructured solids, from the intermediate-phase niobate bars and fibres to the stable-phase cubes. Trititanate (Na2Ti3O7) nanofibres and nanotubes were obtained by the hydrothermal reaction between TiO2 powders or a titanium compound (e.g. TiOSO4·xH2O) and concentrated NaOH solution, by controlling the reaction temperature and NaOH concentration. Trititanate possesses a layered structure, and the Na ions that exist between the negatively charged titanate layers are exchangeable with other metal ions or H+ ions. This ion exchange has a crucial influence on the phase transition of the exchanged products. Exchanging the sodium ions in the titanate with H+ ions yields protonated titanate (H-titanate), and subsequent phase transformation of the H-titanate enables various TiO2 structures with retained morphology. H-titanate, in either nanofibre or nanotube form, can be converted to pure TiO2(B), pure anatase, or mixed TiO2(B) and anatase phases by controlled calcination, or by a two-step process of acid treatment and subsequent calcination, while controlled calcination of the sodium titanate yields new titanate structures (metastable titanate with the formula Na1.5H0.5Ti3O7 and retained fibril morphology) that can be used for the removal of radioactive ions and heavy metal ions from water. The structures and morphologies of the metal oxides were characterised by advanced techniques. Titania nanofibres of mixed anatase and TiO2(B) phases, pure anatase and pure TiO2(B) were obtained by calcining H-titanate nanofibres at different temperatures between 300 and 700 °C. The fibril morphology was retained after calcination, which is suitable for transmission electron microscopy (TEM) analysis. TEM analysis shows that in the mixed-phase structure the interfaces between the anatase and TiO2(B) phases are not random contacts between engaged crystals of the two phases, but form from well-matched lattice planes of the two phases. For instance, the (101) planes of anatase and the (101) planes of TiO2(B) have similar d-spacings (~0.18 nm), and they join together to form a stable interface. The interfaces between the two phases act as a one-way valve that permits the transfer of photogenerated charge from anatase to TiO2(B).
This reduces the recombination of photogenerated electrons and holes in anatase, enhancing the activity for photocatalytic oxidation. Therefore, the mixed-phase nanofibres exhibited higher photocatalytic activity for the degradation of sulforhodamine B (SRB) dye under ultraviolet (UV) light than nanofibres of either pure phase alone, or mechanical mixtures (which have no interfaces) of the two pure-phase nanofibres with a similar phase composition. This verifies the theory that the difference between the conduction band edges of the two phases may drive charge transfer from one phase to the other, which effectively separates the photogenerated charges and thus facilitates the redox reactions involving them. Such an interface structure facilitates charge transfer across the interfaces. The knowledge acquired in this study is important not only for the design of efficient TiO2 photocatalysts but also for understanding the photocatalytic process. Moreover, fibril titania photocatalysts have a great advantage over nanoparticles of the same scale in that they can be separated from a liquid for reuse by filtration, sedimentation or centrifugation. The surface structure of TiO2 also plays a significant role in catalysis and photocatalysis. Four types of large-surface-area TiO2 nanotubes with different phase compositions (labelled NTA, NTBA, NTMA and NTM) were synthesised by calcination and acid treatment of the H-titanate nanotubes. Using in situ FTIR emission spectroscopy (IES), the desorption and re-adsorption of surface OH-groups on an oxide surface can be traced. In this work, the surface OH-group regeneration ability of the TiO2 nanotubes was investigated. The abilities of the four samples were distinctly different, in the order NTA > NTBA > NTMA > NTM. The same order was observed for the catalytic activity when the samples served as photocatalysts for the decomposition of the synthetic dye SRB under UV light, as supports of gold (Au) catalysts (where gold particles were loaded by a colloid-based method) for the photodecomposition of formaldehyde under visible light, and for the catalytic oxidation of CO at low temperatures. Therefore, the ability of TiO2 nanotubes to generate surface OH-groups is an indicator of catalytic activity. The reason behind the correlation is that oxygen vacancies at bridging O2- sites of the TiO2 surface can generate surface OH-groups, and these groups facilitate the adsorption and activation of O2 molecules, which is the key step of the oxidation reactions. A structure for the oxygen vacancies at bridging O2- sites is proposed. A new mechanism for photocatalytic formaldehyde decomposition with the Au-TiO2 catalysts is also proposed: visible light absorbed by the gold nanoparticles, due to the surface plasmon resonance effect, induces the transition of gold 6sp electrons to high energy levels. These energetic electrons can migrate to the conduction band of TiO2, where they are seized by oxygen molecules. Meanwhile, the gold nanoparticles capture electrons from the formaldehyde molecules adsorbed on them, because of gold's high electronegativity. O2 adsorbed on the surface of the TiO2 support is the major electron acceptor; the more O2 adsorbed, the higher the oxidation activity the photocatalyst will exhibit. The last part of this thesis demonstrates two innovative applications of the titanate nanostructures.
Firstly, trititanate and metastable titanate (Na1.5H0.5Ti3O7) nanofibres were used as intelligent absorbents for the removal of radioactive cations and heavy metal ions, utilising their ion-exchange ability, deformable layered structure and fibril morphology. Environmental contamination with radioactive ions and heavy metal ions can pose a serious threat to the health of a large part of the population, and treatment of the wastes is needed to produce a product suitable for long-term storage and disposal. The ion-exchange ability of the layered titanate structure permits the adsorption of bivalent toxic cations (Sr2+, Ra2+, Pb2+) from aqueous solution. More importantly, the adsorption is irreversible: the deformation of the structure induced by the strong interaction between the adsorbed bivalent cations and the negatively charged TiO6 octahedra results in permanent entrapment of the toxic cations in the fibres, so that they can be safely deposited. Compared to conventional clay and zeolite sorbents, the fibril absorbents have the great advantage that they can be readily dispersed into, and separated from, a liquid. Secondly, new-generation membranes were constructed using large titanate and small γ-alumina nanofibres as the intermediate and top layers, respectively, on a porous alumina substrate via a spin-coating process. Compared to conventional ceramic membranes constructed from spherical particles, a ceramic membrane constructed from fibres permits high flux because of the large porosity of its separation layers. The voids in the separation layer determine the selectivity and flux of a separation membrane: when the sizes of the voids are similar (meaning a similar selectivity of the separation layer), the flux passing through the membrane increases with the volume of the voids, which are the filtration passages. For the ideal and simplest texture, a mesh constructed from nanofibres 10 nm thick with a uniform pore size of 60 nm, the porosity is greater than 73.5%. In contrast, the porosity of a separation layer with the same pore size but constructed from spherical metal oxide particles, as in conventional ceramic membranes, is 36% or less. The membrane constructed from titanate nanofibres and a layer of randomly oriented alumina nanofibres was able to filter out 96.8% of latex spheres of 60 nm size while maintaining a high flux of between 600 and 900 L m-2 h-1, more than 15 times higher than that of the conventional membrane reported in the most recent study.
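A worked check of the porosity comparison, under deliberately simple geometry assumptions (a single layer of parallel cylindrical fibres versus randomly close-packed spheres); the thesis's >73.5% figure comes from its own ideal mesh texture, so the fibre estimate here is cruder, but both make the same point relative to ~36% for packed spheres:

```python
# Worked check of the porosity comparison above, under simple idealised
# geometry: cylindrical fibres (d = 10 nm) spaced to leave 60 nm pores,
# versus a randomly close-packed bed of spheres.
import math

d = 10.0            # fibre diameter, nm
pore = 60.0         # pore (gap) size, nm
pitch = d + pore    # centre-to-centre fibre spacing, nm

# One layer of parallel fibres, thickness d: solid fraction per unit area
solid_fibre_layer = (math.pi * d**2 / 4) / (pitch * d)
print(f"fibre mesh porosity   ~ {1 - solid_fibre_layer:.1%}")  # ~88.8%

# Random close packing of equal spheres occupies ~64% of space
print(f"sphere layer porosity ~ {1 - 0.64:.1%}")               # ~36%
```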

Relevance:

80.00%

Abstract:

INTRODUCTION: Queensland University of Technology (QUT) Library is partnering with High Performance Computing (HPC) services and the Division of Research and Commercialisation to develop and deliver a range of integrated research support services and systems designed to enhance the research capabilities of the University. Existing and developing research support services include support for publishing strategies (including open access), bibliographic citation and ranking services, research data management, the use of online collaboration tools, online survey tools, quantitative and qualitative data analysis, and content management and storage solutions. In order to deliver timely and effective research referral and support services, it is imperative that library staff maintain their awareness of, and develop expertise in, new eResearch methods and technologies. ---------- METHODS: In 2009/10 QUT Library initiated an online survey for support staff and researchers, and a series of focus groups for researchers, aimed at gaining a better understanding of current and future eResearch practices and skills. These would better inform the development of a research skills training program and of new research support services. The Library and HPC also implemented a program of seminars and workshops designed to introduce key library staff to a broad range of eResearch concepts and technologies; feedback was obtained after each training session. A number of new services were implemented throughout 2009 and 2010. ---------- RESULTS: Key findings of the survey and focus groups are related to the development of the staff development program. Feedback from program attendees is provided and evaluated, and the staff development program is assessed in terms of its success in supporting the implementation of new research support services. ---------- CONCLUSIONS: QUT Library has embarked on an ambitious awareness and skills development program to assist Library staff through a period of rapid change and broadening scope for the Library. Successes and challenges of the program are discussed, and a number of recommendations are made, both in retrospect and looking forward to the future training needs of Library staff in supporting the University's future research goals.